GH-800: Drain the Pipeline #450
base: master
Conversation
…_keys_for_terminal_response
…rue passes for the first time
…n_signal_when_last_data_is_true
…wn_signal_when_last_data_is_true
…rue is working fine
…e_received_by_stream_handler_pool
…lled_integration is passing
…_up_dead_streams()" This reverts commit ae08930.
This reverts commit d81c2f0.
This reverts commit 1880006.
This reverts commit 4f8c141.
This reverts commit 31db7c4.
…ata_is_true is passing" This reverts commit 54040ae.
This reverts commit f94c8d3.
This reverts commit 3b387a7.
This reverts commit 2ecfebc.
…tical integration test passing
…ssage is improved
@@ -58,6 +58,8 @@ use tokio::prelude::Future;
pub const CRASH_KEY: &str = "PROXYSERVER";
pub const RETURN_ROUTE_TTL: Duration = Duration::from_secs(120);

pub const STREAM_KEY_PURGE_DELAY: Duration = Duration::from_secs(5);
Think about this value. It's not only the time allowed for straggling packets to make it to their destination, it's also the time allowed for evil exit Nodes to pad connections with garbage that they'll get paid for. Remember, the browser has already shut this stream down, so the user will never see any of the straggling packets or their consequences. Therefore an evil exit Node, once it gets notification that a stream is to be shut down, can immediately start sending big packets full of random trash back to us, and we'll accept and pay for them for this long. This parameter needs fairly careful tuning for the sake of security.
A suggestion might be to put in logging logic (if it doesn't already exist) to log whenever we receive one of these straggler packets that we're going to throw away, and then the testers can provide us logs to look at so that we can see how late stragglers are in real life, and tune this parameter to a balance between getting banned as a deadbeat and paying for trash from evil exit Nodes.
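That logging idea could look something like the minimal sketch below. `StragglerTracker`, the `String` stream keys, and the log wording are all illustrative stand-ins assumed for the example, not the crate's real types; the point is just to record when each stream was ordered closed and log how late each discarded straggler arrives, so testers' logs can drive the tuning of `STREAM_KEY_PURGE_DELAY`:

```rust
use std::collections::HashMap;
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical sketch: remember when each stream was ordered closed, then log
// the lateness of every straggler we throw away so the purge delay can be
// tuned from real-world data.
struct StragglerTracker {
    closed_at: HashMap<String, Instant>, // StreamKey simplified to String here
}

impl StragglerTracker {
    fn new() -> Self {
        Self { closed_at: HashMap::new() }
    }

    fn mark_closed(&mut self, stream_key: &str) {
        self.closed_at.insert(stream_key.to_string(), Instant::now());
    }

    // Returns how late the straggler was, if the stream had already been closed.
    fn record_straggler(&self, stream_key: &str) -> Option<Duration> {
        self.closed_at.get(stream_key).map(|closed| {
            let lateness = closed.elapsed();
            eprintln!(
                "Discarding straggler for stream key {}: arrived {}ms after close",
                stream_key,
                lateness.as_millis()
            );
            lateness
        })
    }
}

fn main() {
    let mut tracker = StragglerTracker::new();
    tracker.mark_closed("AAAA");
    thread::sleep(Duration::from_millis(10));
    let lateness = tracker.record_straggler("AAAA").expect("stream was closed");
    assert!(lateness >= Duration::from_millis(10));
    assert!(tracker.record_straggler("BBBB").is_none()); // never closed: not a straggler
}
```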
@@ -603,6 +633,10 @@ impl ProxyServer {
    }

    fn purge_stream_key(&mut self, stream_key: &StreamKey) {
        debug!(
We should probably check here to see if the stream key has already been purged (by a previous delayed message) and abort if it has.
        debug!(
            self.logger,
            "Retiring stream key {}: no more data", &stream_key
        );
        let _ = self.keys_and_addrs.remove_a(stream_key);
        let _ = self.stream_key_routes.remove(stream_key);
        let _ = self.tunneled_hosts.remove(stream_key);
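A minimal sketch of that already-purged guard, using a plain `HashMap` with `String` keys in place of the real `ProxyServer` fields (all names here are simplified stand-ins):

```rust
use std::collections::HashMap;

type StreamKey = String; // stand-in for the real StreamKey type

struct ProxyServerSketch {
    keys_and_addrs: HashMap<StreamKey, String>,
}

impl ProxyServerSketch {
    // Returns true if this call actually purged the key, false if a previous
    // delayed StreamKeyPurge message already removed it.
    fn purge_stream_key(&mut self, stream_key: &StreamKey) -> bool {
        if !self.keys_and_addrs.contains_key(stream_key) {
            return false; // already purged: abort instead of retiring the key twice
        }
        self.keys_and_addrs.remove(stream_key);
        true
    }
}

fn main() {
    let mut server = ProxyServerSketch { keys_and_addrs: HashMap::new() };
    server
        .keys_and_addrs
        .insert("stream-1".to_string(), "1.2.3.4:5678".to_string());
    assert!(server.purge_stream_key(&"stream-1".to_string())); // first purge succeeds
    assert!(!server.purge_stream_key(&"stream-1".to_string())); // second is a no-op
}
```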
Suggestion: Combine these last two `HashMap`s into one, with the `StreamKey` indexing a single structure that has both route and tunneled host in it. Add a third field that's a boolean flag that says whether the stream is in the process of waiting for stragglers. Use this flag in `handle_client_response_payload()` above to make sure each dying stream gets only one `StreamKeyPurge` message, no matter how many straggling packets come in for it.
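One way that combined map might look, as a hedged sketch; `StreamInfo`, the `Vec<u8>` standing in for the real `Route` type, and the `String` keys are assumptions for illustration:

```rust
use std::collections::HashMap;

type StreamKey = String; // illustrative stand-in

// One value per StreamKey instead of two parallel HashMaps.
struct StreamInfo {
    route: Vec<u8>,                // stand-in for the real Route type
    tunneled_host: Option<String>, // Some(_) only for tunneled streams
    purge_pending: bool,           // set once a StreamKeyPurge has been scheduled
}

#[derive(Default)]
struct Streams {
    by_key: HashMap<StreamKey, StreamInfo>,
}

impl Streams {
    // Returns true only on the first call for a key, so the caller can send
    // exactly one StreamKeyPurge per dying stream regardless of how many
    // straggling packets arrive.
    fn schedule_purge(&mut self, key: &StreamKey) -> bool {
        match self.by_key.get_mut(key) {
            Some(info) if !info.purge_pending => {
                info.purge_pending = true;
                true
            }
            _ => false,
        }
    }
}

fn main() {
    let mut streams = Streams::default();
    streams.by_key.insert(
        "stream-1".to_string(),
        StreamInfo { route: vec![1, 2, 3], tunneled_host: None, purge_pending: false },
    );
    assert!(streams.schedule_purge(&"stream-1".to_string())); // first straggler schedules
    assert!(!streams.schedule_purge(&"stream-1".to_string())); // later ones do not
}
```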
Further suggestion: That flag that makes sure each `StreamKey` gets only one `StreamKeyPurge` message? Make it a `struct` that contains both that boolean flag and also two timestamps: one that carries the instant the terminal `ExpiredCoresPackage` was received, and one that carries the instant the most recent straggler was received. That way, when the `StreamKeyPurge` message is finally processed, you can create a log saying something like, "For StreamKey {}, stragglers kept arriving for {} milliseconds after it was ordered closed." That will help immensely with tuning the delay for the real world.
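A sketch of that struct under the same caveats; `PurgeState`, its field names, and the log wording are assumed for the example:

```rust
use std::thread;
use std::time::{Duration, Instant};

// The boolean flag plus the two timestamps the suggestion asks for.
struct PurgeState {
    purge_scheduled: bool,              // ensures only one StreamKeyPurge is sent
    terminal_received_at: Instant,      // when the terminal ExpiredCoresPackage arrived
    last_straggler_at: Option<Instant>, // most recent straggler, if any
}

impl PurgeState {
    fn new() -> Self {
        Self {
            purge_scheduled: true,
            terminal_received_at: Instant::now(),
            last_straggler_at: None,
        }
    }

    fn note_straggler(&mut self) {
        self.last_straggler_at = Some(Instant::now());
    }

    // How long stragglers kept arriving after the close order, for the log line.
    fn straggler_window(&self) -> Option<Duration> {
        self.last_straggler_at
            .map(|last| last.duration_since(self.terminal_received_at))
    }
}

fn main() {
    let mut state = PurgeState::new();
    assert!(state.purge_scheduled);
    assert!(state.straggler_window().is_none()); // no stragglers yet
    thread::sleep(Duration::from_millis(5));
    state.note_straggler();
    let window = state.straggler_window().expect("straggler recorded");
    println!(
        "For StreamKey {}, stragglers kept arriving for {} milliseconds after it was ordered closed.",
        "stream-1",
        window.as_millis()
    );
    assert!(window >= Duration::from_millis(5));
}
```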
                delay: Duration::from_millis(stream_key_purge_delay_in_millis + offset_in_millis),
            })
            .unwrap();
        system.run();
If this test fails, does the test panic, or does this `system.run()` just return an unsuccessful code? If the latter, this test will always pass. If you're absolutely sure that the test will panic if it fails, okay, but if it were me, I'd modify one of those `assert!()`s (maybe by adding or removing a `!`) to make sure the test fails and run it to see if I was right.
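The concern can be illustrated in miniature with plain threads (not the actor system this PR uses): a panicking assertion on a background thread only fails the parent if the `JoinHandle`'s result is actually checked. Everything here is a stand-alone sketch, not code from this crate:

```rust
use std::thread;

// Runs an assertion on a background thread and reports whether the parent
// would notice a failure. Err(_) from join() means the thread panicked.
fn assertion_reached_parent(should_pass: bool) -> bool {
    let handle = thread::spawn(move || {
        assert!(should_pass, "assertion failed on background thread");
    });
    handle.join().is_ok()
}

fn main() {
    assert!(assertion_reached_parent(true)); // passing assertion: join succeeds
    // The failing assertion panics the child thread, but the parent only sees
    // it because we inspect the join() result.
    assert!(!assertion_reached_parent(false));
}
```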
Also, this is a very clever test. I hope it's not too clever for folks to figure out. If you think it might be, a few judicious comments might help clarify it. Maybe a banner comment at the top with a little ASCII timeline diagram explaining what you're doing with those -100ms and +100ms messages.
It shows this once it fails: `assertion failed: proxy_server.keys_and_addrs.is_empty()`.
No description provided.